PAC-Bayesian Analysis of Martingales and Multiarmed Bandits

Authors

  • Yevgeny Seldin
  • François Laviolette
  • John Shawe-Taylor
  • Jan Peters
  • Peter Auer
Abstract

We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that makes it possible to bound expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach integrates the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and the many other fields where martingales and limited feedback are encountered.
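To make the martingale concentration result concrete, here is a minimal Monte Carlo sketch (an illustration, not taken from the paper) comparing the empirical tail probability of a simple ±1 random-walk martingale against the Hoeffding-Azuma bound P(|M_n| ≥ t) ≤ 2·exp(−t²/(2n)), which holds when the martingale differences are bounded by 1 in absolute value:

```python
import math
import random

def azuma_bound(n, t):
    # Hoeffding-Azuma tail bound for a martingale with increments in [-1, 1]:
    # P(|M_n| >= t) <= 2 * exp(-t^2 / (2 * n))
    return 2.0 * math.exp(-t * t / (2.0 * n))

def empirical_tail(n, t, trials=5000, seed=0):
    # Monte Carlo estimate of P(|M_n| >= t) for a +/-1 random-walk martingale.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(m) >= t:
            hits += 1
    return hits / trials

# The empirical tail should fall below the analytic bound.
n, t = 100, 30
print(empirical_tail(n, t), "<=", azuma_bound(n, t))
```

The bound is loose for this walk (the true tail is closer to a Gaussian tail), which is part of the paper's motivation for seeking alternative tools.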


Related articles

PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits

We combine PAC-Bayesian analysis with a Bernstein-type inequality for martingales to obtain a result that makes it possible to control the concentration of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. We apply this result to derive a regret bound for the multiarmed bandit problem. Our result forms a basis for integrative simultaneous analysis of e...


PAC-Bayesian Analysis of Contextual Bandits

We derive an instantaneous (per-round) data-dependent regret bound for stochastic multiarmed bandits with side information (also known as contextual bandits). The scaling of our regret bound with the number of states (contexts) N goes as


PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off

We develop a coherent framework for integrative simultaneous analysis of the exploration-exploitation and model order selection trade-offs. We improve over our preceding results on the same subject (Seldin et al., 2011) by combining PAC-Bayesian analysis with a Bernstein-type inequality for martingales. Such a combination is also of independent interest for studies of multiple simultaneously evolvi...


Evaluation and Analysis of the Performance of the EXP3 Algorithm in Stochastic Environments

EXP3 is a popular algorithm for adversarial multiarmed bandits, suggested and analyzed in this setting by Auer et al. [2002b]. Recently there has been increased interest in the performance of this algorithm in the stochastic setting, due to its new applications to stochastic multiarmed bandits with side information [Seldin et al., 2011] and to multiarmed bandits in the mixed stochastic-adversaria...
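For reference, a minimal sketch of the EXP3 algorithm in Python (an illustration of the standard exponential-weights update, not the evaluation performed in this paper; the Bernoulli bandit environment below is an assumption for the demo):

```python
import math
import random

def exp3(reward, K, T, gamma=0.1, seed=0):
    # EXP3 (Auer et al., 2002b): exponential weights with uniform exploration.
    # `reward(arm)` returns a reward in [0, 1] for the pulled arm.
    rng = random.Random(seed)
    w = [1.0] * K
    pulls = [0] * K
    for _ in range(T):
        total = sum(w)
        # Mix the exponential-weights distribution with uniform exploration.
        p = [(1 - gamma) * wi / total + gamma / K for wi in w]
        arm = rng.choices(range(K), weights=p)[0]
        x = reward(arm)
        # Importance-weighted estimate x / p[arm] keeps the update unbiased.
        w[arm] *= math.exp(gamma * (x / p[arm]) / K)
        # Rescale weights to avoid floating-point overflow over long horizons.
        m = max(w)
        w = [wi / m for wi in w]
        pulls[arm] += 1
    return pulls

# Stochastic two-armed bandit: arm 0 pays 0.9 in expectation, arm 1 pays 0.1.
env = random.Random(1)
pulls = exp3(lambda a: float(env.random() < (0.9 if a == 0 else 0.1)), K=2, T=2000)
print(pulls)
```

In a stochastic environment with a clear gap between the arms, the pull counts concentrate on the better arm, which is the kind of behavior such an evaluation examines.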



Journal:
  • CoRR

Volume: abs/1105.2416  Issue: 

Pages: -

Publication date: 2011